Is a naturalistic account of reason compatible with its objectivity?

Can rational objectivism be implemented in a connectionist system (like the brain)?

Greg Detre

Tuesday, January 29, 2002

Dr Tasioulas

 

Introduction

At root, connectionism amounts to the thesis that the brain is a dynamical system, a mathematically modellable complex of levers and pulleys, or in this case, neurons and synapses. The high-level behaviour of the system seems to emerge like magic out of a morass of low-level interactions, just as the seemingly centralised wheeling and coordination of a flock of birds results from each bird paying attention to the position and speed of its neighbours (local rules).

Define connectionism

More specifically, connectionism refers to the family of theories that aim to understand mental abilities in terms of formalised models of the brain. These usually employ large numbers of nodes (neurons), with weighted interconnections (synapses). The firing rate of a neuron is usually some non-linear function (e.g. a sigmoid) of its activity, which is calculated as the weighted sum of the firing rates of the neurons that synapse onto it. In this way, activity is propagated in parallel from the input neurons eventually to the output neurons.
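
To make the arithmetic concrete, here is a minimal sketch in Python of one step of this propagation (the weights and firing rates are invented purely for illustration, not drawn from any actual model):

    import numpy as np

    def sigmoid(x):
        # Non-linear activation: squashes any input into (0, 1).
        return 1.0 / (1.0 + np.exp(-x))

    def propagate(firing_rates, weights):
        # Each output neuron's activity is the weighted sum of the
        # presynaptic firing rates, passed through the non-linearity.
        return sigmoid(weights @ firing_rates)

    # Three input neurons feeding two output neurons.
    rates = np.array([0.2, 0.9, 0.5])
    w = np.array([[0.4, -1.2, 0.7],
                  [1.5, 0.3, -0.6]])
    print(propagate(rates, w))  # two output firing rates in (0, 1)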

Input neurons are defined as those whose activation is (at least partially) determined by the external environment (in the case of the brain, various sensory receptors), and output neurons are those which effect some change in the system's behaviour in that environment (e.g. motor neurons connected to muscle) – hidden neurons are those whose activity is invisible to the environment.

What makes neural networks interesting is their ability to self-organise, or 'learn', by modifying their weights according to a learning algorithm. The simplest are the Hebbian-type learning rules[1], which are based on the principle:

the synapse between two neurons should be strengthened if the neurons fire simultaneously

This can be implemented in a pattern associator, an architecture for associating a set of input patterns with a set of pre-specified output patterns. Innumerable improvements and revisions have since been proposed – the Hebbian rule really only works well for orthogonal (i.e. uncorrelated) input patterns – but its human-like robustness and ability to generalise are notable. When presented with a novel pattern which is similar but not identical to a learned input pattern, its output will be similar or identical to the learned output pattern. That is, it generalises to new data and forms prototypes based on resemblances between input patterns – features which had to be explicitly, inelegantly and inefficiently built into previous symbolic models.
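
A minimal sketch of such a pattern associator under the Hebbian rule, in Python (the patterns and targets are invented for illustration, and orthogonal, as the rule requires):

    import numpy as np

    # Two orthogonal (uncorrelated) input patterns, each paired with a
    # pre-specified output pattern.
    inputs = np.array([[1.0, 0.0, 1.0, 0.0],
                       [0.0, 1.0, 0.0, 1.0]])
    targets = np.array([[1.0, 0.0],
                        [0.0, 1.0]])

    # Hebbian learning: strengthen the weight between an input and an
    # output unit whenever the two are active together.
    W = np.zeros((2, 4))
    for x, t in zip(inputs, targets):
        W += np.outer(t, x)

    # Recall with a degraded version of the first input pattern: the
    # output still lies far closer to its learned target [1, 0].
    noisy = np.array([1.0, 0.0, 0.8, 0.2])
    print(W @ noisy)  # -> [1.8, 0.2]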

Biological plausibility

The field is polarised over the degree to which these 'artificial neural networks' are intended to be biologically plausible, that is, the degree to which a model corresponds to actual processes instantiated in the brain.

This debate centres on the use of powerful but biologically unrealistic learning rules like back-propagation.

Level of computation

Some neuron-level models aim to simulate the processes going on inside a neuron to an almost molecular level, while others ignore the sub-neural computation, treating neurons as just simple devices for integrating inputs.

Relate rationality to the LOTH

 

Systematicity

 

Productivity etc.

 

Problems for connectionism

There are various reasons why an arch-rationalist like Nagel might have concerns about Smolensky's PTC.

 

 

Why do connectionist systems seem unsuitable for implementing rationality???

analogue (probabilistic/statistical???)

we don't evolve representations

systematicity (cf Chomsky's argument about combinatorial explosion etc.???)

generality (rationality seems to be able to work in more or less any domain)

Penrose – non-computable human mental functioning

 

Is there hope that a connectionist system could implement rationality???

ah, but that's where the low-level non-linearity resolves into more or less discrete results at a high level

the DNA may constrain things into representations, allowing the interactionist to more or less posit innate ideas

systematicity possible in connectionist systems

generality???

Penrose???

parallels between symbolic approaches and rationality

a connectionist system is a Universal Turing machine, and so could just be the hardware implementation of a symbolic rational model

symbolic-only approaches have made little progress in modelling any aspect of the mind, including the sort of broad rationality we're talking about

rationality seems so intimately tied to things like creativity and analogy that maybe it requires some sort of sub-symbolic system

like evolution, connectionism is about self-organisation, and so gives an alternative demonstration of how our minds could be so adapted to our environment

 

Discarded - connectionism

Understanding

John Searle memorably and eloquently raised the question of understanding in relation to purely syntactic processing in his discussions of the Chinese Room. I think he has been firmly rebutted, even by the interlocutors in his original paper ('Minds, Brains, and Programs'), and I will not pursue the point here.

Smolensky and the cognitive level

The term 'connectionism' is used as a thesis about the workings of the mind in two different ways, one making a much stronger claim than the other.

  1. The stronger claim, as espoused by Smolensky, can be stated negatively: a symbolic, cognitive-level description cannot fully capture (i.e. specify in law-like terms) our mental activity. That is, if we want to fully understand (i.e. account for or predict) the workings of the mind, we cannot talk at the level of psychology, but must (at least partially) descend towards the neural level. Smolensky maintains that a sub-symbolic level consisting of non-semantically evaluable constituents or micro-features of symbols exists, above the neural level, at which we will be able to fully specify (i.e. capture nomologically) mental activity.
  2. The weaker claim simply states that the brain is, at the neural level, a self-organising connectionist system of nodes with weighted connections; it is fleshed out under 'Points' below.

 

does the weaker, functionalist connectionist claim have anything to say about the symbolic/sub-symbolic debate???

do I actually need to talk about strong and weak claims at all???

 

Following Smolensky's 'Proper Treatment of Connectionism' (PTC),

 

Points

The weaker claim simply states that the brain is a self-organising connectionist system – it is composed, at the neural level, of nodes with weighted connections. Sensory input is transduced into action potentials, propagated and processed, and eventually transduced into muscle activity. As mentioned briefly above, there is some debate about the extent to which sub-neuronal processes play a computational role. In effect, this is little more than a fleshed-out reiteration of functionalism.

Kim Plunkett stuff

Could a connectionist system (even one as complex as the brain) ever be truly rational???

In two ways, this is a stupid question. On the one hand, how can anyone know? – our current neural network efforts are so feeble in comparison to human rationality. On the other, humans appear rational (questionably), and we have connectionist brains, so connectionist systems evidently can implement rationality. Well, Nagel for one is prepared to argue that our current conception of mind almost certainly needs to undergo at least one paradigm shift before we can make sense of problems like the mind-body problem and how we can have access to such 'universally valid methods of objective thought'.

Won't there always be a probabilistic aspect to its computation that would make it fallible or non-rational to some extent, i.e. rational 99.9% of the time???

An inherent part of true rationality for Nagel is its generality:

Our aim as thinkers and rational agents is to arrive at principles that are 'universal and exceptionless' – to be able to come up with reasons that apply in all relevantly similar situations, and to have reasons of similar generality that tell us when situations are relevantly similar.

Can a connectionist system ever be generally rational, given that its training data will always be limited, and so its synaptic organisation will be geared towards that limited domain?

 

Is rationality adaptive??? It seems clear that having true beliefs may well be more expensive and less fitness-enhancing than having useful beliefs, and so much less likely to evolve.

Following on from this, Robert Nozick (building on Cosmides & Tooby and others) has an interesting idea: that we have evolved to find certain chains of inference automatic and self-evident, i.e. that there may be hard-wired, specialised inferential mechanisms that were selected for in common past situations. Thus, for example, the philosophical problems we've been least successful with all mark assumptions that evolution has built into us: the problem of induction, of other minds, of the external world, of justifying rationality etc. These seem to me to be just the sort of genetically pre-wired neural representations that are argued against in Rethinking Innateness.

The idea that the brain is implementing formal logic in some hidden way isn't very popular now, but philosophers seem to favour the idea that certain, fairly specific ideas could be genetically coded. You argue against that in Rethinking Innateness, but is it possible that a tiny proportion of the genome does hard-code a handful of vital neural representations???

 

Panicked new attempt Monday, February 04, 2002 1:44 AM

Nozick's account is attractive in a number of ways. It can be accommodated with minimal metaphysical commitments,

Its price is that it does not really face Nagel head on – Nozick is content to admit that he is not explaining rationality 'from first principles'(???) – he presupposes a degree of rationality in order to consider one's own rationality at all. And, as I will discuss later, this is the only position that I think we can take as philosophers. On the one hand, we face an empty, sceptical suspension of belief, since we recognise that in order to hold any justified beliefs whatsoever, we first require a justified belief about our ability to form such beliefs. And yet, in suspending our belief, we have already recognised that this is the only rational option. In this way, Nagel's characterisation of 'thoughts that we cannot get outside of' is particularly appropriate. Indeed, if anything I think he fails to recognise just how inescapable these thoughts are, and the extent to which they underlie absolutely all thought – that all thought is rational, whether pro-rational, anti-rational or simply neutral. We cannot truly survey ourselves thinking, except by thinking.

In a way, it's obvious that we could never monitor our entire brain – with what would we be doing the monitoring? Where can we stand such that we can view our position from any position but our own? Can we turn our eyes back upon our own skull (in a more meaningful sense than just the eyeball-rolling party trick)?

So we have little choice but to accept that simply being able to frame the question of one's own rationality is a sort of base condition for rationality. Doubting is, of necessity, a kind of rational thinking. Descartes' cogito may thus serve instead to bootstrap us into knowledge of our own rationality.

Perhaps it's not so much the doubting or questioning of one's own rationality, as simply being able to conceive of rationality at all. Perhaps the complex notion of rationality is its own key. Being able to conceive abstractly of context-independent, formal, generalisable methods and propositions – or perhaps the notions of context-independence, formality, generalisability, method and proposition collectively – forms the tip of a cognitive-framework iceberg comprising a syntax-manipulating, representation-of-representations mind, even a fallible, specialised, evolved one.

 

Define functionalism

I am going to contend that some variant on these claims will remain the dominant way of thinking about the mind and brain for the foreseeable future, and that this should inform our understanding of rationality in a number of ways. To some degree, adherence to this picture narrows down what we are capable of as connectionist/functionalist-implemented rationalists – most notably, it serves as a constant reminder of our finitude (see Cherniak???).

At the same time though, it may helpfully flesh out our conception of ourselves as rational beings, partly by restricting or constraining the number and type of possible explanations, and partly by providing a good idea of the sort of properties we should expect to find.

Hopefully, considering ourselves as connectionist-rationalists might give us a new approach to the problem of alternate rationalities (i.e. 'conceptual schemes'). I am not thinking of 'multiple realisability' here – this is the term that functionalists use to mean that the same abstract organisation, the same underlying function, and so the same mental abilities, could be implemented in physically very different systems (e.g. a silicon chip could be functionally identical to a biological brain). Rather, I am thinking of the low-level differences between the brains of every human on the planet, despite their being very similar macroscopically. In terms of the actual computation being performed, nobody thinks in exactly the same way. It is an empirical question how similar our brains are – but it is certainly clear that mapping an area from one brain to the corresponding location in another brain is far from easy (as neuroimaging researchers constantly find). It may be that these differences amount to more or less identical computational processes at a higher level. One might imagine such functionally irrelevant differences as being analogous to the difference between, say, a + (b + c) and (a + b) + c. Perhaps, if we were able to say how people's brains differ in terms of the computations being performed, we might eventually begin to trace a broad schema of computational approaches which qualify as rational, to a greater or lesser degree. In fact, a growing number of approaches seek an understanding of the mind in terms of numerous interacting components, moving away from the 'monolithic internal models, monolithic control, and general purpose processing' of 'classical AI' (Brooks et al. (MIT), Dennett's multiple drafts, Fodor's modules).

 

There are certain problems with trying to square connectionism with rationality. Some are relatively general difficulties with connectionism, in its various forms. Others stem from a seeming incompatibility between the two.

Perhaps the broadest criticism of all such approaches stems from Gödel's theorem, most famously advocated in relation to the mind-body problem by Lucas, and more recently, Penrose. Gödel's theorem states that in a formal system above a certain complexity, there will always be formally undecidable, true propositions, i.e. statements that are true, but which cannot be proved within the system. This thwarted attempts like Russell and Whitehead's Principia Mathematica to found the whole of mathematics on a minimal set of principles (axioms). It also poses problems for connectionist systems. Part of the appeal of a connectionist system is that it can be seen as a Universal Turing Machine. Consequently though, formally non-computable functions cannot be implemented finitely by such a system. Penrose argues that the brain (i.e. people) *can* do this, and so our minds must be more than Turing machines. As he argues, there must be more going on in the brain than we're currently aware of at the neural or even sub-neural level – he speculates that there may be quantum effects in microtubules in the brain that allow us to ... If Penrose is right, then almost all of the debate currently centring on the capabilities of purely connectionist systems becomes almost irrelevant, because the power of such a quantum system could potentially be of an unimaginably greater magnitude. The first questions would relate to what limitations such a system would have, and why our brains seem so much more limited than one would expect of such a system.

There are a number of related issues specific to connectionism to consider. To what extent could a connectionist system be as general in its applicability as Nagel's rationality requires? When we reason, or indeed form a sentence, we relate a series of symbols (whether words, propositions, names etc.???) interchangeably together by syntax – although connectionist models can be trained to be systematic, they can also be trained, for example, to recognise 'John loves Mary' without being able to recognise 'Mary loves John' (the problem of 'systematicity'). When a connectionist system represents a proposition as a distributed pattern of activity shaped by its synaptic weights, is it really understanding the proposition?

To an extent, these parallel the debate in evolutionary epistemology about the extent to which true beliefs are adaptive, and whether truth-tracking could have been selected for.

 

 

The idea of the brain as a rational machine brings up the two related issues of discreteness and generalisation.

The brain is a more or less analogue system. It is a dynamical system operating in real time (as opposed to discrete time-steps), based on continuous variables like membrane voltage potential, synaptic weight strength etc. (admittedly, at the molecular level the quantity of neurotransmitter at a given synapse is discrete, but this is a moot point). It seems intuitive that since the computations being performed by the system are analogue, and the outputs also analogue, a neural system could not give discrete responses – at best, the system might respond with a very high tendency in one direction or another, but the neurons are not binary, and do not give 'true' or 'false' answers, only high or low firing rates. As a result, the sort of binary formal logic that mathematicians, logicians and rationalists employ seems inappropriate for such a system.

Computational models have demonstrated that simple logic gates (like AND or OR) can easily be simulated by neural networks. Indeed, much more complicated functions can be replicated too. However, these might be considered misleadingly simple cases, since the number of possible permutations is small enough to be contained inside the training set. The system can learn, like a finite state machine, a set of prescribed, absolute responses to the given input patterns. This is clearly not an option for most problems. One of the major strengths of a connectionist system is that it can generalise. It forms prototypes from the data, and is able to gauge the similarity between given patterns. As a result, it is able to respond appropriately to novel patterns. This is the property that gives rise to 'graceful degradation'. Connectionist systems, unlike the programs running on most desktop computers today, are robust. By this, I mean that unexpected, erroneous or corrupt data does not bring the system to its knees. If you feed a neural network damaged or incomplete data, it will settle into the closest attractor available, based on the weight organisation that has arisen from its training.
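
Both points can be seen in a minimal sketch (a single threshold unit with hand-set, purely illustrative weights, not a trained network): the unit implements AND over its entire, tiny input space, and it also responds sensibly to a corrupted input.

    import numpy as np

    def unit(x, w, threshold=1.5):
        # A single binary threshold neuron: fires if and only if its
        # weighted input exceeds the threshold.
        return int(np.dot(w, x) > threshold)

    w_and = np.array([1.0, 1.0])  # hand-set weights computing AND

    # The full truth table is small enough to serve as a training set.
    for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
        print(x, '->', unit(np.array(x, dtype=float), w_and))

    # Graceful degradation: a corrupted 'both on' input still falls on
    # the right side of the decision boundary.
    print(unit(np.array([0.9, 0.85]), w_and))  # -> 1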

A crucial aspect of a connectionist system's dynamics relates to its non-linearity. By this I mean that the activation function relating a neuron's input to its output is non-linear. This could be a simple binary function, a threshold-linear function, a sigmoid or a logarithm. All that matters is that it is not simply linear. This non-linearity gives rise to peculiar dynamics at a high level, i.e. ensembles of neurons collectively forming a distributed representation, which can begin to seem more and more discrete.
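
One way to see this in miniature (a toy illustration – the gain parameter below is my own choice, not anything biologically derived): as a sigmoid's steepness increases, the same continuous inputs produce outputs ever closer to all-or-nothing.

    import numpy as np

    def sigmoid(x, gain):
        # Logistic activation; 'gain' controls how steep it is.
        return 1.0 / (1.0 + np.exp(-gain * x))

    x = np.array([-0.5, -0.1, 0.1, 0.5])  # continuous net inputs
    for gain in (1, 5, 25):
        print(gain, np.round(sigmoid(x, gain), 3))

    # As the gain rises, the same analogue inputs are squeezed ever
    # closer to 0 or 1: continuous units behaving almost discretely.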

Rationality, in order to 'arrive at principles that are universal and exceptionless – to be able to come up with reasons that apply in all relevantly similar situations, and to have reasons of similar generality that tell us when situations are relevantly similar', seems to require too much of a network. In a way, this is an empirical question: 'Is the data set to which our brains have been exposed sufficiently broad and representative for us to be able to reason reliably about the areas to which we apply it?' It requires an implausible stretch of the imagination to explain how our senses could provide the data by means of which we could learn to reason mathematically or logically.

At this point, we have to remember a very obvious point: people's reasoning improves. This is not simply a point about developmental psychology. Clearly, our brains are undergoing various genetically-timed stages of progression, especially during our earliest years, initially forming an enormous profusion of synaptic connections that are subsequently pruned. This is not what I am really referring to – as we progress through education, even long beyond the point at which our brains are undergoing developmental (i.e. internally-prescribed) changes, our ability to reason improves. We are continually forming new conceptual spaces, and this improvement is incremental. This is related to the reason that maths, for instance, requires an element of trudging practice that cannot be avoided. An essential part of learning a new theory or technique is practising it, repeatedly, with different problems. In this way, we are expanding our set of training data to be more representative of a given problem domain, and in the process expanding the generalisation ability of our reasoning.

 

 

Conclusions

We can be as charitable as possible to our connectionist account by saying that it wholly contains the symbolic approach within it, since you can always implement the symbolic approach as a connectionist system (officially, at least – setting learning issues aside). Similarly, Smolensky has shown that systematicity is not, in principle, an insurmountable problem at all.

I think that a connectionist system, especially an evolved, specialised, jury-rigged one like ours, will always be a little unpredictable in its behaviour. We form representations

 

Discarded

Most computational models are run on serial machines, which can only approximate, at a comparatively coarse temporal resolution, hugely simplified models of neurons.

Questions

do functionalists have to be materialists??? Yes, to the extent that they hold that mental activity is describable as a (possibly complex, non-linear, hidden-representation etc.) *function*, that is to say, that there is a determinate (and by implication, though perhaps not for sure, determinable) relationship between (sensory) inputs and (motor) outputs. In order to relate material inputs to material outputs, as is required by the body, the functionalist pretty much has to think in terms of a material implementation of the intermediate function

 

rational objectivism is a bad name for what I'm talking about – I'm not talking about fabric-of-the-universe(???) norms, so much as our rational capacities

– in doing this, and phrasing it like this, am I first assuming fabric-of-the-universe norms, and how does this change and straitjacket me, and relate to truth etc.???

 

why do 'states' matter so much to the functionalist???

– is this anything to do with the fact that a Turing machine needs states for its computational properties???

 

is the a + b + c example a good one???

 

to what extent is rationality a continuum???

what am I trying to do???

terminology!!!

 

is connectionism really part of our naturalistic framework???

yes, surely – it just depends whether we're talking about the strong or the weak claim

in order for the discrete-like non-linearity to emerge, does it have to be a distributed representation, or is the non-linearity (e.g. of the activation function) enough???

is the fact that the brain is self-organising part of the generalisation/systematicity issue???

are the problems of generality and systematicity related to the difficulty of formal, context-independent symbols emerging in a connectionist system???

I suppose you can imagine a neural net which has learned to add two numbers together – but do those numbers have to be of a certain size? well, they have to be representable within the input vector – is that different from the way that we have limits on the size of numbers we can hold in our heads??? well, maybe, but that's why we have algebra

 

 



[1] Hebb, D. O. (1949). The Organization of Behavior. New York: Wiley.